CLN: use integer parsing functions from stdlib #62658

Conversation
I think this is much more difficult and less performant than it needs to be because we have a suboptimal structure for the parser.

Instead of storing a `char **words` member, is there a way that we can define a struct like:

```c
struct PandasParsedString {
  const char *str;
  size_t len;
  int64_t is_separator[2];
};
```

and then make the member in the parser a `PandasParsedString *words` array? As you parse each word you can assign these struct members once, instead of having to query them after the fact.

I'm assuming the `is_separator` member is a bit-packed struct signifying whether a particular character is a separator or not (see other comment about a 128 character limit). We could implement this a few other ways, but I figure this is the most memory efficient.

With that information up front you can avoid having to do a character search for separators here way after the fact; if there are no separators you can avoid looping altogether, but if there are you can move a region of characters in batches to the local buffer rather than having to go char by char.
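A minimal sketch of what such a record could look like, treating `is_separator` as a 128-bit bitmap (one bit per character position, matching the 128-character limit discussed elsewhere). The helper names `mark_separator`, `is_separator_at`, and `has_any_separator` are hypothetical, not pandas code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the proposed per-word parser record. The two 64-bit words of
 * is_separator form a 128-bit bitmap: bit i is set when str[i] is a
 * thousands separator (assumes words of at most 128 characters). */
struct PandasParsedString {
  const char *str;
  size_t len;
  uint64_t is_separator[2];
};

/* Mark position i as a separator while tokenizing the word. */
static void mark_separator(struct PandasParsedString *w, size_t i) {
  w->is_separator[i / 64] |= (uint64_t)1 << (i % 64);
}

/* Later passes can test a position in O(1) instead of re-scanning. */
static bool is_separator_at(const struct PandasParsedString *w, size_t i) {
  return (w->is_separator[i / 64] >> (i % 64)) & 1;
}

/* A word with no separators can skip the removal loop entirely. */
static bool has_any_separator(const struct PandasParsedString *w) {
  return (w->is_separator[0] | w->is_separator[1]) != 0;
}
```

The point of assigning these members once at tokenization time is that every later consumer (integer, float, bool parsers) gets separator positions for free.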
```c
#define ERROR_NO_DIGITS 1
#define ERROR_OVERFLOW 2
#define ERROR_INVALID_CHARS 3
#define ERROR_NO_MEMORY 4
```
---
There's already a PARSER_OUT_OF_MEMORY define in this header - does this provide anything different?
---
I missed it. Since we will avoid using dynamic memory, I will delete it soon and the memory error won't be applicable.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
}
}

char *start = malloc((chars_to_copy + 1) * sizeof(char));
```
---
For performance reasons we should avoid using the heap as much as possible. Given this only applies to numeric digits, I think it would be better to just stack allocate an array of a certain size.
Trying to be forward looking, we can probably get away with a local buffer of 128 bytes, since that covers the maximum character length of Arrow's Decimal256 (76), and sizes it up to a power of 2. Anything longer than that can reasonably not be parsed as numeric.
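A minimal sketch of the stack-allocation idea under the sizing rationale above; `PROCESSED_WORD_CAPACITY` and the helper function are illustrative names, not the final pandas code:

```c
#include <stddef.h>

/* 128 covers Arrow Decimal256's maximum of 76 digit characters, rounded
 * up to a power of two; longer tokens are rejected as non-numeric. */
enum { PROCESSED_WORD_CAPACITY = 128 };

/* Illustrative: each call gets its own buffer on the calling thread's
 * stack, so unlike a static/global buffer this is thread safe, and
 * unlike malloc there is nothing to free. */
static int can_parse_as_numeric(size_t token_len) {
  char buffer[PROCESSED_WORD_CAPACITY];
  (void)buffer; /* real code would copy the separator-free token here */
  return token_len + 1 <= PROCESSED_WORD_CAPACITY; /* +1 for '\0' */
}
```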
---
> For performance reasons we should avoid using the heap as much as possible. Given this only applies to numeric digits, I think it would be better to just stack allocate an array of a certain size.
Question: The approach of using a stack will not be thread safe. Is this a problem?
Another approach that I was thinking of was to use a `do-while` loop, verifying if it stopped at `tsep`. It will complicate the code a little bit, but it will be thread safe and won't need dynamic memory.
---
Every thread gets its own stack, so the stack solution is thread safe while the heap is not. Perhaps you are mixing stack/local variables with the concept of `static`?
Of course we aren't doing any multithreaded parsing here, but that's a great question and thing to consider
---
> Every thread gets its own stack, so the stack solution is thread safe while the heap is not.
Thanks, I didn't know that.
> Perhaps you are mixing stack/local variables with the concept of static?
I first thought that I should make the array a global variable, that's the reason for the confusion.
I created a buffer of size 128 on the stack to remove `tsep`. With these changes, I removed the `malloc` call and the memory error.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
 * @param p_item Pointer to verify
 * @return Non-zero integer if it has a digit, 0 otherwise.
 */
static inline int has_digit_int(const char *str) {
```
---
```diff
-static inline int has_digit_int(const char *str) {
+static inline bool has_digit_int(const char *str) {
```
We are far enough removed from C89 that we can use the bool type :-)
---
Done in 87789e6
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
/* Copy a string without `char_to_remove` into `output`,
 * while ensuring it's null terminated.
 */
static void copy_string_without_char(char *output, const char *str,
```
---
```diff
-static void copy_string_without_char(char *output, const char *str,
+static void copy_string_without_char(char output[PROCESSED_WORD_CAPACITY], const char *str,
```
Don't think you need to pass this capacity as a separate argument, since it is always the same value.
---
Done in 2287944
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
output[i++] = *src;
}
}
if (i < output_size) {
```
---
This is pretty wasteful to continually write null bytes. You could either memset the buffer before you send it, or ideally short circuit when you've processed all the necessary bytes
---
> continually write null bytes

Actually, it wasn't continually writing null bytes, just once after copying `src`. Anyway, I changed it to use `memset`.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
static void copy_string_without_char(char *output, const char *str,
                                     char char_to_remove, size_t output_size) {
  size_t i = 0;
  for (const char *src = str; *src != '\0' && i < output_size; src++) {
```
---
It would be great to avoid character-by-character writes if we can, especially since this is a relatively sparse character to search for.
You might be limited with what you can do without some of the larger changes I suggested, but maybe even just finding a region and writing multiple bytes at once will be more performant.
---
Perhaps something like:
```c
size_t pos = 0;
const char *oldptr = p_item;
const char *newptr;
while ((newptr = strchr(oldptr, tsep)) != NULL) {
  size_t len = newptr - oldptr;
  memcpy(&output[pos], oldptr, len);
  pos += len;
  oldptr = newptr + 1;
}
// don't forget the tail after the last separator
size_t tail_len = strlen(oldptr);
memcpy(&output[pos], oldptr, tail_len);
pos += tail_len;
output[pos] = '\0';
```
Might be an off-by-one, and probably worth bounds checking. You might also want to use `strchrnul` instead of `strchr` to avoid a buffer overrun.
Just some brief unchecked thoughts - hope they help
---
I couldn't make `strchr` work, because I couldn't know where the null terminator would be. I tried your suggestion in 2bea3c2.

I don't understand the errors on macOS, but they are definitely related to the changes here. They weren't happening when I was using malloc. I will try a different solution that doesn't rely on transforming the string.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
return 0;
}
}
static inline int64_t add_int_check_overflow(int64_t lhs, int64_t rhs,
```
---
There's no need to implement custom overflow detection. We already have macros that defer to compiler builtins (see the np_datetime.c module). You can refactor those if needed to make available here.
FWIW C23 also standardizes checked overflow functions. I'm not sure how far along MSVC is in implementing that, but for other platforms we can likely use the standard on newer compilers.
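A hedged sketch of how such a macro could be layered: prefer C23's `<stdckdint.h>` where available, then the gcc/clang builtin, with a plain-C fallback for compilers that have neither. `PD_CHECKED_ADD_INT64` is a made-up name standing in for the existing macros in `np_datetime.c`:

```c
#include <stdbool.h>
#include <stdint.h>

#if defined(__has_include)
#if __has_include(<stdckdint.h>)
#include <stdckdint.h> /* C23 standardized checked arithmetic */
/* ckd_add returns true when the addition overflowed */
#define PD_CHECKED_ADD_INT64(a, b, out) ckd_add((out), (a), (b))
#endif
#endif

#ifndef PD_CHECKED_ADD_INT64
#if defined(__GNUC__) || defined(__clang__)
/* Generic builtin: works for any integer width, true on overflow. */
#define PD_CHECKED_ADD_INT64(a, b, out) __builtin_add_overflow((a), (b), (out))
#else
/* Portable fallback for compilers without a builtin. */
static bool pd_checked_add_int64(int64_t a, int64_t b, int64_t *out) {
  if ((b > 0 && a > INT64_MAX - b) || (b < 0 && a < INT64_MIN - b)) {
    return true; /* would overflow */
  }
  *out = a + b;
  return false;
}
#define PD_CHECKED_ADD_INT64(a, b, out) pd_checked_add_int64((a), (b), (out))
#endif
#endif
```

On gcc/clang the builtin compiles to a single add plus an overflow-flag check, which is why the review prefers it over hand-rolled range comparisons.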
---
The pinned meson-python version doesn't allow compiling with `std=c23`. I don't know why stdckdint.h worked on my system with c11.

I will use the macro in `np_datetime.c` to check for overflow, but I will move it to the header.
---
I am still using my implementation for `uint` overflow, because the NumPy implementation seems specific to `int64` (at least for Windows).
---
> The pinned meson-python version doesn't allow compiling with `std=c23`. I don't know why stdckdint.h worked on my system with c11.

You can specify multiple standards with Meson, and it will pick the first that the compiler supports. So you can just set the option `c_std=c23,c11`.

> I don't know why stdckdint.h worked on my system with c11

While stdckdint.h may not have been standardized until after c11, that doesn't exclude your compiler from using it if it has already implemented it.

> I am still using my implementation for `uint` overflow, because the NumPy implementation seems specific to `int64` (at least for Windows).

It's unfortunate the MSVC implementation doesn't use a generic macro like the gcc/clang versions do to make these work regardless of size. However, there's the `ULongLongAdd` function you can use instead of your own implementation:
https://learn.microsoft.com/en-us/windows/win32/api/intsafe/nf-intsafe-ulonglongadd
Generally opt for the compiler built-ins when available; they are almost assuredly faster, and mean less code we have to maintain :-)
---
> You can specify multiple standards with Meson, and it will pick the first that the compiler supports. So you can just set the option `c_std=c23,c11`.

I got this error:

```
../../meson.build:2:0: ERROR: Value "c11,c23" (of type "string") for combo option "C language standard to use" is not one of the choices. Possible choices are (as string): "none", "c89", "c99", "c11", "c17", "c18", "c2x", "gnu89", "gnu99", "gnu11", "gnu17", "gnu18", "gnu2x".
```

It needs meson>=1.4.
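For reference, the fallback-list form described above would look something like this in `meson.build` (illustrative; per the discussion it requires a newer Meson than the pinned version):

```meson
# Meson picks the first standard in the list the compiler supports,
# so compilers without C23 fall back to C11.
project('pandas', 'c',
  default_options: ['c_std=c23,c11'])
```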
```c
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
#endif // NPY_NO_DEPRECATED_API

#if defined(_WIN32)
```
---
It's probably best to move these to the `include/pandas/portable.h` header.
```c
_Static_assert(0,
               "Overflow checking not detected; please try a newer compiler");
#endif
// __has_builtin was added in gcc 10, but our muslinux_1_1 build environment
```
---
Not sure why this is a separate branch down here, but I think you can combine with the macros on lines 51/52
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
return true;
case '+':
case '-':
return isdigit_ascii(str[1]);
```
---
This will violate bounds checking if you pass a string of `+` or `-`.
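One way to make the bound explicit is to reject a lone sign before looking at `str[1]`. This is a sketch, not the PR's code; `isdigit_ascii` is re-implemented here only so the example is self-contained, and the input is assumed to be null terminated:

```c
#include <stdbool.h>

/* ASCII-only digit check (stand-in for pandas' isdigit_ascii). */
static bool isdigit_ascii(char c) { return c >= '0' && c <= '9'; }

/* For "+" or "-" alone, str[1] is the terminating '\0', which
 * isdigit_ascii rejects; the explicit check makes that intent clear
 * instead of relying on the terminator happening to be a non-digit. */
static bool has_digit_int(const char *str) {
  switch (str[0]) {
  case '+':
  case '-':
    return str[1] != '\0' && isdigit_ascii(str[1]);
  default:
    return isdigit_ascii(str[0]);
  }
}
```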
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
return *str == '\0';
}

static int power_int(int base, int exponent) {
```
---
Why is int the right return type for this?
---
Mainly because I expect the result to be a small power of 10, up to 10**6
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
exponent /= 2;
}

return result * base;
```
---
This looks like it could overflow easily
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
((number == pre_max) && (d - '0' <= dig_pre_max))) {
number = number * 10 + (d - '0');
d = *++p;
errno = 0;
```
---
Do we need to check that `errno` isn't set already before just clearing it?
---
It doesn't seem necessary; this is just parsing a new number, and error handling is done in parsers.pyx.
---
I think you can remove this now, right? Generally I'd advise against clearing `errno` without good reason. This currently is equivalent to doing a `try ... except Exception` in Python and blindly continuing along, so it should be avoided.
---
The errno should be reset. I added a comment explaining why. Mainly because `strtoll` assigns its value, and I want to reset it before calling it, so that parsing previous words doesn't pollute the verification below.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
d = *++p;
while (errno == 0 && tsep != '\0' && *endptr == tsep) {
// Skip multiple consecutive tsep
while (*endptr == tsep) {
```
---
Maybe I am misreading but its strange to see this in the invariant here and directly preceding it. Can we consolidate that logic somehow?
---
Another option is to move `endptr` to the next character and make sure it's a digit; if it's not, exit the loop.
---
That will probably read easier
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
return 0;
}
}
char *new_end = NULL;
```
---
No need to assign NULL
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
while (isspace_ascii(*p)) {
++p;
ptrdiff_t digits = new_end - endptr;
int64_t mul_result = power_int(10, (int)digits);
```
---
Ah OK - I think you are assembling this in pieces. However, I'd still suggest just getting rid of the separators in a local buffer and then calling `strtoll` (or whatever function) on the whole buffer. There's a lot of nuance to range/value handling that it's not worth reimplementing.
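A sketch of that flow under illustrative names (`parse_int64_with_tsep`, the buffer capacity, and the numeric error codes are all made up for the example): copy the token into a fixed-size local buffer with separators stripped, then let `strtoll` handle sign, range, and digit validation:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

enum { WORD_CAPACITY = 128 }; /* illustrative capacity */

/* Parse an int64 from str, ignoring the thousands separator tsep.
 * Returns 0 and sets *error to a nonzero code on failure. */
static int64_t parse_int64_with_tsep(const char *str, char tsep, int *error) {
  char buffer[WORD_CAPACITY];
  size_t pos = 0;
  for (const char *p = str; *p != '\0'; p++) {
    if (*p == tsep) {
      continue; /* drop the separator */
    }
    if (pos + 1 >= sizeof(buffer)) { /* leave room for '\0' */
      *error = 1;                    /* too long to be numeric */
      return 0;
    }
    buffer[pos++] = *p;
  }
  buffer[pos] = '\0';

  char *endptr;
  errno = 0; /* strtoll reports overflow only through errno */
  const long long value = strtoll(buffer, &endptr, 10);
  if (errno == ERANGE) {
    *error = 2; /* overflow/underflow */
    return 0;
  }
  if (endptr == buffer || *endptr != '\0') {
    *error = 3; /* no digits, or trailing garbage */
    return 0;
  }
  *error = 0;
  return (int64_t)value;
}
```

This keeps all the range/value nuance inside `strtoll` (which already implements it correctly) and confines the custom logic to the trivial separator-stripping copy.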
---
The problem with the local buffer is that there were failing tests on macOS that I couldn't reproduce. The malloc solution worked, but the local buffer on the stack was failing.
---
That sounds like you were hitting some undefined behavior, but that would come from whatever the implementation was, not the general solution. We should definitely go back to that route and just see where we got stuck
---
Ok. I will address some of your comments here, commit, and then revert to that state.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
number = number * 10 + (d - '0');
d = *++p;
errno = 0;
char *endptr = NULL;
```
---
Don't assign NULL
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
size_t bytes_written = 0;

while (left < end_ptr) {
right = memchr(left, char_to_remove, end_ptr - left);
```
---
It's really pedantic (this is C, after all...) but ptrdiff_t is typically signed whereas size_t (which memchr expects) is unsigned. Additionally, while most systems will have the same width for these types, I don't know that that is universally true.

At the very least this is going to generate a compiler warning, so I think it's still best to stick with a size_t argument if we can.
---
I updated the code to keep track of how many bytes it read. I also compiled with `-Weverything` and made some casts to remove some warnings: `-Wc++-compat` due to `memchr`, and `-Wsign-conversion` due to implicit casting of `ptrdiff_t`.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
size_t remaining_bytes_to_read = str_len;

while (remaining_bytes_to_read > 0) {
right = (const char *)memchr(left, char_to_remove, remaining_bytes_to_read);
```
---
```diff
-right = (const char *)memchr(left, char_to_remove, remaining_bytes_to_read);
+right = memchr(left, char_to_remove, remaining_bytes_to_read);
```

While C++ requires this cast, in C it's a bit strange.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
// We are also casting directly to size_t because
// `left` never goes beyond `right` or `end_ptr`.
size_t chunk_size =
    right ? (size_t)(right - left) : (size_t)(end_ptr - left);
```
---
```diff
-    right ? (size_t)(right - left) : (size_t)(end_ptr - left);
+    right ? right - left : end_ptr - left;
```
No point in casting these - the assignment will do it
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
bytes_written += chunk_size;

// Advance past the removed character if we found it.
left = right ? right + 1 : end_ptr;
```
---
When in a tight loop you really want to avoid branching; this has 3 different branches checking whether `right` is non-NULL - can we consolidate them?
---
I moved the `remaining_bytes_to_read` to the start of the loop. For me, the branching is quite clean, using only 2 ternary operators.

To consolidate it even further, I think it would need an explicit if-else and a temporary variable to indicate where `left` should point to. All these ternaries are mainly because I am avoiding `left` pointing to `end_ptr + 1`.
---
> All these ternaries are mainly because I am avoiding `left` pointing to `end_ptr + 1`.

There's nothing wrong with this pointing to that location as long as you don't try to subsequently read or write it.
---
Now I reassign right to end_ptr if it's NULL.
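Putting the pieces of this thread together, a self-contained sketch of the `memchr`-based copy could look like this (the signature and bounds handling are illustrative, not the PR's exact code; whole separator-free regions are moved with `memcpy` instead of byte by byte):

```c
#include <stddef.h>
#include <string.h>

/* Copy str (str_len bytes) into output, dropping every char_to_remove.
 * Returns the number of bytes written (output is null terminated), or
 * 0 if the result would not fit in output_size. */
static size_t copy_string_without_char(char *output, size_t output_size,
                                       const char *str, size_t str_len,
                                       char char_to_remove) {
  const char *left = str;
  const char *const end_ptr = str + str_len;
  size_t bytes_written = 0;

  while (left < end_ptr) {
    const size_t remaining = (size_t)(end_ptr - left);
    const char *right = memchr(left, char_to_remove, remaining);
    if (right == NULL) {
      right = end_ptr; /* no separator left: the rest is one chunk */
    }
    const size_t chunk_size = (size_t)(right - left);
    if (bytes_written + chunk_size + 1 > output_size) {
      return 0; /* would not fit with the '\0' terminator */
    }
    memcpy(output + bytes_written, left, chunk_size);
    bytes_written += chunk_size;
    if (right == end_ptr) {
      break; /* done; never compute end_ptr + 1 */
    }
    left = right + 1; /* skip the removed character */
  }
  output[bytes_written] = '\0';
  return bytes_written;
}
```

Reassigning `right` to `end_ptr` when `memchr` misses leaves a single `right == end_ptr` test per iteration instead of three separate NULL checks.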
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
const char *str, size_t str_len,
char char_to_remove) {
const char *left = str;
const char *right;
```
---
Since `right` is only used inside of the loop, you can move this therein (to the initial definition).
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
size_t bytes_written = 0;

while (left < end_ptr) {
size_t remaining_bytes_to_read = end_ptr - left;
```
---
```diff
-size_t remaining_bytes_to_read = end_ptr - left;
+const size_t remaining_bytes_to_read = end_ptr - left;
```

Always best to add `const` unless there's a reason for mutability.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
right = end_ptr;
}

size_t chunk_size = right - left;
```
---
```diff
-size_t chunk_size = right - left;
+const size_t chunk_size = right - left;
```
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c

size_t chunk_size = right - left;

// check if we have enough space, including the null terminator.
```
---
I think the comments here and beyond are all superfluous, i.e. they add no value on top of what the code itself is showing
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
// Skip leading spaces.
while (isspace_ascii(*p)) {
++p;
if (!p_item || *p_item == '\0') {
```
---
Is it necessary to add this? I don't think we should be special-casing behavior for null or the null byte; I expect the former is undefined (since it's not a string) and the latter would be handled naturally by the rest of the logic (?)
---
Indeed, not necessary. The null byte is already handled below.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
// Skip trailing spaces.
while (isspace_ascii(*p)) {
++p;
if (errno == ERANGE) {
```
---
This is left over from a refactor, right? I don't think errno will be set by anything preceding this?
---
Indeed, I forgot to remove it.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c

// Did we use up all the characters?
if (*p) {
char *endptr = NULL;
```
---
```diff
-char *endptr = NULL;
+char *endptr;
```
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
char *endptr = NULL;
// strtoll sets errno if it finds an overflow.
// It's value is reset to don't pollute the verification below.
errno = 0;
```
---
As a matter of convention, you need to check `errno` and "raise" if it's set before clearing it.
---
By "raise" you mean printing to stderr?
---
By "raise" I mean follow whatever exception handling is implemented. In this particular function, it looks like you should set the `error` variable and return 0 when an exception is encountered.
---
You might want to give the CPython error handling doc a read, which most of the extensions base their methodology off of:
https://docs.python.org/3/c-api/exceptions.html#exception-handling
---
I am now resetting `errno` after handling the overflow error.
pandas/_libs/src/parser/tokenizer.c (Outdated):

```c
// Skip leading spaces.
while (isspace_ascii(*p)) {
++p;
while (isspace_ascii(*p_item)) {
```
---
I think you removed the `const char *p = p_item` assignment as a cleanup and not for any type of functionality, but if that's true it's inflating the diff. We should move cleanup items like that to a separate PR.
---
I've made several changes that complicate the diff a lot. I'll put back the `const char *p` assignment, and also remove the need for some of the auxiliary functions I created.
---
Great - thanks!
It seems that

Additionally, I think it would need other members for other parsers - at least for float, that would require the

In C technically all "strings" are null-terminated, it's just not a requirement that a

Probably worth a dedicated issue to discuss design ideas
This was a suggestion from @WillAyd in #62623 (comment).

This simplifies the current parsing logic by utilizing existing integer parsing functions from stdlib (`strtoll` and `strtoull`) instead of using our own. One disadvantage of this approach is that in case there is a thousands separator (`tsep`), we allocate memory on the heap and process the existing string to remove this separator.